Yield estimation is a powerful tool in vineyard management, as it allows growers to fine-tune practices to optimize yield and quality. However, yield estimation is currently performed using manual sampling, which is time-consuming and imprecise. This study demonstrates the application of proximal imaging combined with deep learning for yield estimation in vineyards. Continuous data collection with a vehicle-mounted sensing kit, combined with collection of ground-truth yield data at harvest using a commercial yield monitor, allowed the generation of a large dataset of 23,581 yield points and 107,933 images. Moreover, the study was conducted in a mechanically managed commercial vineyard, representing a challenging environment for image analysis but a common set of conditions in the California Central Valley. Three model architectures were tested: object detection, CNN regression, and transformer models. The object detection model was trained on hand-labeled images to localize grape bunches, and either the bunch count or the bunch pixel area was summed and correlated with grape yield. Conversely, the regression models were trained end-to-end to predict grape yield from image data without the need for hand labeling. Results show that the transformer model and the object detection model with pixel-area processing performed comparably, with mean absolute percent errors of 18% and 18.5%, respectively, on a representative held-out dataset. Saliency mapping was used to show that the attention of the CNN model was located near the predicted locations of grape bunches as well as the top of the grapevine canopy. Overall, the study demonstrates the applicability of proximal imaging and deep learning for vineyard yield prediction at a large scale. Furthermore, the end-to-end modeling approach performed comparably to the object detection approach while eliminating the need for hand labeling.
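To make the reported metric concrete, here is a minimal, hypothetical sketch (not the authors' code) of how summed bunch pixel area could be calibrated against ground-truth yield and scored with the mean absolute percent error quoted above; all data and coefficients are synthetic stand-ins.

```python
# Minimal sketch (synthetic data, illustrative only): relate summed bunch pixel
# area to harvested yield with a linear fit, then score with MAPE.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-image totals: summed bunch pixel area from a detector,
# and the matching ground-truth yield from a harvester yield monitor.
pixel_area = rng.uniform(1e4, 5e4, size=200)                   # pixels per image
true_yield = 0.0004 * pixel_area + rng.normal(0, 0.5, 200)     # kg, synthetic

# Fit a simple calibration: yield ~ a * pixel_area + b
a, b = np.polyfit(pixel_area, true_yield, deg=1)
pred_yield = a * pixel_area + b

mape = np.mean(np.abs((pred_yield - true_yield) / true_yield)) * 100
print(f"MAPE: {mape:.1f}%")
```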
Since early in the coronavirus disease 2019 (COVID-19) pandemic, there has been interest in using artificial intelligence methods to predict COVID-19 infection status based on vocal audio signals, for example cough recordings. However, existing studies have limitations in terms of data collection and of the assessment of the performances of the proposed predictive models. This paper rigorously assesses state-of-the-art machine learning techniques used to predict COVID-19 infection status based on vocal audio signals, using a dataset collected by the UK Health Security Agency. This dataset includes acoustic recordings and extensive study participant meta-data. We provide guidelines on testing the performance of methods to classify COVID-19 infection status based on acoustic features and we discuss how these can be extended more generally to the development and assessment of predictive methods based on public health datasets.
We introduce the MAsked Generative VIdeo Transformer, MAGVIT, to tackle various video synthesis tasks with a single model. We introduce a 3D tokenizer to quantize a video into spatial-temporal visual tokens and propose an embedding method for masked video token modeling to facilitate multi-task learning. We conduct extensive experiments to demonstrate the quality, efficiency, and flexibility of MAGVIT. Our experiments show that (i) MAGVIT performs favorably against state-of-the-art approaches and establishes the best-published FVD on three video generation benchmarks, including the challenging Kinetics-600. (ii) MAGVIT outperforms existing methods in inference time by two orders of magnitude against diffusion models and by 60x against autoregressive models. (iii) A single MAGVIT model supports ten diverse generation tasks and generalizes across videos from different visual domains. The source code and trained models will be released to the public at https://magvit.cs.cmu.edu.
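As an illustration of the masked video token modeling idea described above, the following is a minimal sketch under assumed toy sizes (not MAGVIT's released code): discrete tokens stand in for the 3D tokenizer's output, a fraction of them are replaced by a mask token, and a small transformer is trained to reconstruct the masked positions. Positional embeddings and the multi-task conditioning are omitted.

```python
# Minimal sketch of masked token modeling (assumed toy sizes, not MAGVIT itself).
import torch
import torch.nn as nn

VOCAB, MASK_ID, SEQ_LEN, DIM = 1024, 1024, 256, 512  # illustrative sizes

class MaskedTokenModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB + 1, DIM)        # +1 for the [MASK] token
        layer = nn.TransformerEncoderLayer(DIM, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, tokens):
        return self.head(self.encoder(self.embed(tokens)))

model = MaskedTokenModel()
tokens = torch.randint(0, VOCAB, (2, SEQ_LEN))           # stand-in for 3D-tokenizer output
mask = torch.rand(2, SEQ_LEN) < 0.5                      # mask ~50% of positions
inputs = tokens.masked_fill(mask, MASK_ID)

logits = model(inputs)
loss = nn.functional.cross_entropy(logits[mask], tokens[mask])  # loss on masked tokens only
loss.backward()
print(float(loss))
```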
This white paper lays out a vision of research and development in the field of artificial intelligence for the next decade (and beyond). Its denouement is a cyber-physical ecosystem of natural and synthetic sense-making, in which humans are integral participants: what we call ''shared intelligence''. This vision is premised on active inference, a formulation of adaptive behavior that can be read as a physics of intelligence, and which inherits from the physics of self-organization. In this context, we understand intelligence as the capacity to accumulate evidence for a generative model of one's sensed world, also known as self-evidencing. Formally, this corresponds to maximizing (Bayesian) model evidence, via belief updating over several scales: i.e., inference, learning, and model selection. Operationally, this self-evidencing can be realized via (variational) message passing or belief propagation on a factor graph. Crucially, active inference foregrounds an existential imperative of intelligent systems; namely, curiosity or the resolution of uncertainty. This same imperative underwrites belief sharing in ensembles of agents, in which certain aspects (i.e., factors) of each agent's generative world model provide a common ground or frame of reference. Active inference plays a foundational role in this ecology of belief sharing, leading to a formal account of collective intelligence that rests on shared narratives and goals. We also consider the kinds of communication protocols that must be developed to enable such an ecosystem of intelligences, and we motivate the development of a shared hyper-spatial modeling language and transaction protocol as a first (and key) step towards such an ecology.
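For readers unfamiliar with the formalism, self-evidencing is conventionally expressed in the active-inference literature (though not spelled out in this abstract) as minimization of the variational free energy $F$, which upper-bounds negative log model evidence:

$$F[q] = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right] = D_{\mathrm{KL}}\!\left[q(s)\,\|\,p(s \mid o)\right] - \ln p(o) \;\ge\; -\ln p(o),$$

so belief updating that minimizes $F$, at the scales of inference, learning, and model selection, implicitly maximizes a lower bound on the log model evidence $\ln p(o)$.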
As an autonomous system performs a task, it should maintain a calibrated estimate of the probability that it will achieve the user's goal. If that probability falls below some desired level, it should alert the user so that appropriate interventions can be made. This paper considers settings where the user's goal is specified as a target interval for a real-valued performance summary, such as the cumulative reward, measured at a fixed horizon $H$. At each time $t \in \{0, \ldots, H-1\}$, our method produces a calibrated estimate of the probability that the final cumulative reward will fall within a user-specified target interval $[y^-,y^+].$ Using this estimate, the autonomous system can raise an alarm if the probability drops below a specified threshold. We compute the probability estimates by inverting conformal prediction. Our starting point is the Conformalized Quantile Regression (CQR) method of Romano et al., which applies split-conformal prediction to the results of quantile regression. CQR is not invertible, but by using the conditional cumulative distribution function (CDF) as the non-conformity measure, we show how to obtain an invertible modification that we call \textbf{P}robability-space \textbf{C}onformalized \textbf{Q}uantile \textbf{R}egression (PCQR). Like CQR, PCQR produces well-calibrated conditional prediction intervals with finite-sample marginal guarantees. By inverting PCQR, we obtain marginal guarantees for the probability that the cumulative reward of an autonomous system will fall within an arbitrary user-specified target interval. Experiments on two domains confirm that these probabilities are well-calibrated.
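For context, here is a minimal sketch of the split-conformal CQR baseline (Romano et al.) that the paper starts from, using assumed quantile regressors and synthetic data; PCQR itself differs by using the conditional CDF as the non-conformity measure so that the construction can be inverted into a probability estimate.

```python
# Minimal sketch of split-conformal CQR (illustrative setup, not the paper's code).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.3 + 0.2 * np.abs(X[:, 0]))

# Split into a proper training set and a calibration set.
X_tr, y_tr, X_cal, y_cal = X[:1000], y[:1000], X[1000:], y[1000:]
alpha = 0.1  # target miscoverage

lo = GradientBoostingRegressor(loss="quantile", alpha=alpha / 2).fit(X_tr, y_tr)
hi = GradientBoostingRegressor(loss="quantile", alpha=1 - alpha / 2).fit(X_tr, y_tr)

# CQR non-conformity score: how far a calibration point falls outside the quantile band.
scores = np.maximum(lo.predict(X_cal) - y_cal, y_cal - hi.predict(X_cal))
q = np.quantile(scores, np.ceil((1 - alpha) * (len(y_cal) + 1)) / len(y_cal))

# Calibrated prediction interval for a new point: [q_lo - q, q_hi + q].
x_new = np.array([[1.5]])
print(lo.predict(x_new) - q, hi.predict(x_new) + q)
```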
Learning new tasks and skills without losing prior learning (i.e., without catastrophic forgetting) is a computational challenge for both artificial and biological neural networks, yet artificial systems struggle to achieve parity with their biological analogues. Mammalian brains employ numerous neural operations in support of continual learning during sleep, and these are ripe for artificial adaptation. Here, we investigate how modeling three distinct components of mammalian sleep together affects continual learning in artificial neural networks: (1) a veridical memory replay process observed during non-rapid eye movement (NREM) sleep; (2) a generative memory replay process linked to REM sleep; and (3) a synaptic downscaling process that has been proposed to tune the signal-to-noise ratio and support neural maintenance. When evaluating performance on the continual learning CIFAR-100 image classification benchmark, we find that including all three sleep components improves the maximum accuracy achieved during training and reduces catastrophic forgetting during training on later tasks. While some catastrophic forgetting persists over the course of network training, higher levels of synaptic downscaling lead to better retention of early tasks and further facilitate the recovery of early-task accuracy during subsequent training. A key takeaway is that there is a trade-off when choosing the level of synaptic downscaling: more aggressive downscaling better protects early tasks, while less downscaling enhances the ability to learn new tasks, and intermediate levels strike a balance that yields the highest overall accuracy during training. Overall, our results provide insight into how sleep components can be adapted to enhance artificial continual learning systems, and they highlight areas for future neuroscientific sleep research to further improve such systems.
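One plausible (hypothetical) reading of the synaptic downscaling component is a global multiplicative shrinkage of weights applied between tasks, sketched below; the `scale` parameter plays the role of the downscaling level whose trade-off the abstract discusses.

```python
# Minimal sketch of sleep-like synaptic downscaling (an assumed mechanism,
# not the paper's implementation).
import torch
import torch.nn as nn

def downscale_synapses(model: nn.Module, scale: float = 0.9) -> None:
    """Multiplicatively shrink all connection weights.

    Smaller `scale` = more aggressive downscaling: better retention of earlier
    tasks, but less headroom for learning new ones (per the reported trade-off).
    """
    with torch.no_grad():
        for module in model.modules():
            if isinstance(module, (nn.Linear, nn.Conv2d)):
                module.weight.mul_(scale)

net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
downscale_synapses(net, scale=0.9)   # e.g., after training on one task, before the next
```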
More than twenty years after its introduction, Annealed Importance Sampling (AIS) remains one of the most effective methods for marginal likelihood estimation. It relies on a sequence of distributions interpolating between a tractable initial distribution and the target distribution of interest, which are simulated from approximately using a non-homogeneous Markov chain. To obtain an importance sampling estimate of the marginal likelihood, AIS introduces an extended target distribution to reweight the Markov chain proposal. While much effort has been devoted to improving the proposal distribution used by AIS by changing the intermediate distributions and the corresponding Markov kernels, a relatively neglected issue is that AIS uses a convenient but suboptimal extended target distribution, which can hinder its performance. We here leverage recent progress in score-based generative modeling (SGM) to approximate the optimal extended target distribution for AIS proposals corresponding to discretizations of Langevin and Hamiltonian dynamics. We demonstrate these novel, differentiable AIS procedures on a number of synthetic benchmark distributions and variational auto-encoders.
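As background, here is a minimal sketch of vanilla AIS (not the paper's differentiable SGM-based procedure): a geometric path from a standard normal to an unnormalized target, incremental importance weights, and random-walk Metropolis transitions; the target and schedule are illustrative choices.

```python
# Minimal sketch of vanilla AIS estimating a normalizing constant (illustrative).
import numpy as np

rng = np.random.default_rng(0)

def log_p0(x):                       # tractable initial distribution: N(0, 1)
    return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

def log_f_target(x):                 # unnormalized target: N(2, 0.5^2), true Z ~ 1.253
    return -0.5 * ((x - 2.0) / 0.5) ** 2

def log_f(x, beta):                  # geometric interpolation between p0 and the target
    return (1 - beta) * log_p0(x) + beta * log_f_target(x)

betas = np.linspace(0.0, 1.0, 101)   # annealing schedule
n_particles = 2000

x = rng.normal(size=n_particles)     # draws from the initial distribution
log_w = np.zeros(n_particles)

for b_prev, b in zip(betas[:-1], betas[1:]):
    log_w += log_f(x, b) - log_f(x, b_prev)            # incremental importance weights
    prop = x + 0.5 * rng.normal(size=n_particles)      # random-walk MH kernel targeting f_b
    accept = np.log(rng.uniform(size=n_particles)) < log_f(prop, b) - log_f(x, b)
    x = np.where(accept, prop, x)

Z_hat = np.exp(log_w).mean()
print(Z_hat, "vs true", np.sqrt(2 * np.pi) * 0.5)
```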
Single-view RGB-D human reconstruction with implicit functions is often formulated as per-point classification. Specifically, a set of 3D locations within the camera's view frustum is first projected onto the image, and a corresponding feature is extracted for each 3D location. The feature at each 3D location is then used to classify independently whether the corresponding 3D point lies inside or outside the observed object. This procedure leads to sub-optimal results, because correlations between the predictions of neighboring locations are only taken into account implicitly via the extracted features. For more accurate results, we propose the occupancy planes (OPlanes) representation, which formulates single-view RGB-D human reconstruction as occupancy prediction on planes that slice through the camera's view frustum. Such a representation provides more flexibility than voxel grids and enables correlations to be exploited better than per-point classification. On the challenging S3D data, we observe that a simple classifier based on the OPlanes representation yields compelling results, especially in difficult situations with partial occlusions by other objects and partial visibility, which have not been addressed by prior work.
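To clarify the contrast being drawn, the following is a hypothetical sketch of the per-point classification baseline (pixel-aligned features plus an independent per-point MLP); OPlanes instead predict whole occupancy maps on planes slicing the view frustum, so neighboring predictions share context. All shapes and names are illustrative assumptions.

```python
# Minimal sketch of the per-point classification baseline (assumed setup).
import torch
import torch.nn.functional as F

feat = torch.randn(1, 64, 128, 128)            # image feature map from some encoder
pts_2d = torch.rand(1, 1, 500, 2) * 2 - 1      # 500 query points projected to [-1, 1]^2
depth = torch.rand(1, 500, 1)                  # per-point depth in the camera frame

# Pixel-aligned features: one 64-d vector per 3D query point.
sampled = F.grid_sample(feat, pts_2d, align_corners=True)   # (1, 64, 1, 500)
sampled = sampled.squeeze(2).permute(0, 2, 1)               # (1, 500, 64)

# A tiny MLP decides occupancy per point, with no coupling between neighbors.
mlp = torch.nn.Sequential(
    torch.nn.Linear(64 + 1, 128), torch.nn.ReLU(), torch.nn.Linear(128, 1)
)
occupancy_logits = mlp(torch.cat([sampled, depth], dim=-1))  # (1, 500, 1)
print(occupancy_logits.shape)
```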
While recovering geometry from image and video data has received much attention in computer vision, methods for capturing the texture of a given geometry are less mature. Specifically, classical methods for texture generation often assume clean geometry and reasonably well-aligned image data. Although recent methods, e.g., adversarial texture optimization, better handle the lower-quality data obtained from hand-held devices, we find that they still struggle frequently. To improve robustness, particularly of recent adversarial texture optimization, we develop an explicit initialization and an alignment procedure. Our method handles complex geometry thanks to a rendering of the geometry into the texture map and a hard-assignment-based initialization, and it handles misalignment between geometry and images by integrating fast image alignment into the texture refinement optimization. We demonstrate the efficacy of our texture generation on a dataset of 11 scenes with a total of 2807 frames, observing relative improvements of 7.8% and 11.1% in perceptual and sharpness measurements.
Graph neural networks (GNNs) are emerging in chemical engineering for the end-to-end learning of physicochemical properties from molecular graphs. A key element of GNNs is the pooling function, which combines the atom vectors into a molecular fingerprint. Most previous works use a standard pooling function to predict a variety of properties. However, unsuitable pooling functions can lead to unphysical GNNs that generalize poorly. We compare and select meaningful GNN pooling methods based on physical knowledge about the properties being learned. The impact of physical pooling functions is demonstrated on molecular properties calculated with quantum mechanical computations. We also compare the results to the recent set2set pooling approach. We recommend using sum pooling to predict properties that depend on molecular size, and we compare pooling functions for properties that are independent of molecular size. Overall, we show that the use of physical pooling functions significantly enhances generalization.
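The size-dependence argument can be illustrated with a toy example: sum pooling scales with the number of atoms while mean pooling does not, which is why sum pooling is recommended above for size-dependent properties. The sketch below uses dummy atom embeddings for illustration.

```python
# Minimal sketch: sum vs. mean pooling of atom vectors into a molecular fingerprint.
import torch

def pool_atoms(atom_vecs: torch.Tensor, mode: str = "sum") -> torch.Tensor:
    """Combine per-atom embeddings (n_atoms, dim) into one fingerprint (dim,)."""
    if mode == "sum":
        return atom_vecs.sum(dim=0)
    if mode == "mean":
        return atom_vecs.mean(dim=0)
    raise ValueError(f"unknown pooling mode: {mode}")

small = torch.ones(3, 8)    # toy 3-atom molecule, 8-d atom embeddings
large = torch.ones(30, 8)   # toy 30-atom molecule

print(pool_atoms(small, "sum")[0], pool_atoms(large, "sum")[0])    # 3.0 vs 30.0: size-aware
print(pool_atoms(small, "mean")[0], pool_atoms(large, "mean")[0])  # 1.0 vs 1.0: size-blind
```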